Results 1 - 20 of 278
1.
Sci Rep ; 11(1): 24108, 2021 12 16.
Article in English | MEDLINE | ID: mdl-34916547

ABSTRACT

Despite the great potential of Virtual Reality (VR) to arouse emotions, no VR affective databases are available, in contrast to the established databases of pictures, videos, and sounds. In this paper, we describe the validation of ten affective interactive Virtual Environments (VEs) designed for use in Virtual Reality. These environments are related to five emotions. Testing employed two different experimental setups to deliver the overall experience. Because of the ongoing COVID-19 pandemic, the setup did not include any immersive VR technology, but the VEs were designed to run on stereoscopic visual displays. We collected measures of the participants' emotional experience based on six discrete emotional categories plus neutrality, and we included an assessment of the sense of presence for the different experiences. The results showed that the scenarios can be differentiated according to the emotion aroused. Finally, the comparison between the two experimental setups demonstrated high reliability of the experience and strong adaptability of the scenarios to different contexts of use.


Subjects
Arousal/physiology , COVID-19/psychology , Databases, Factual/statistics & numerical data , Emotions/physiology , SARS-CoV-2/isolation & purification , Virtual Reality , Adult , COVID-19/epidemiology , COVID-19/virology , Emotions/classification , Empathy , Female , Humans , Male , Pandemics/prevention & control , Photic Stimulation/methods , Reproducibility of Results , SARS-CoV-2/physiology , Young Adult
2.
Comput Math Methods Med ; 2021: 2520394, 2021.
Article in English | MEDLINE | ID: mdl-34671415

ABSTRACT

Emotion recognition plays an important role in human-computer interaction (HCI), and automatic emotion recognition based on EEG is an important topic in brain-computer interface (BCI) applications. Deep learning is now widely used for EEG emotion recognition and has achieved remarkable results. However, because of the cost of data collection, most EEG datasets contain only a small amount of EEG data, and the sample categories in these datasets are unbalanced. Both problems make it difficult for a deep learning model to predict the emotional state. In this paper, we propose a new sample generation method using generative adversarial networks to address EEG sample shortage and sample category imbalance. In experiments, we explore the performance of emotion recognition with frequency-band-correlation and frequency-band-separation computational models, before and after data augmentation, on standard EEG-based emotion datasets. Our experimental results show that data augmentation with generative adversarial networks can effectively improve the performance of deep-learning-based emotion recognition. We also find that the frequency-band-correlation deep learning model is more conducive to emotion recognition.


Subjects
Brain-Computer Interfaces/statistics & numerical data , Electroencephalography/statistics & numerical data , Emotions/physiology , Neural Networks, Computer , Computational Biology , Databases, Factual , Deep Learning , Emotions/classification , Humans
3.
Comput Math Methods Med ; 2021: 9940148, 2021.
Article in English | MEDLINE | ID: mdl-34122621

ABSTRACT

As one of the key issues in affective computing, emotion recognition has rich application scenarios and important research value. However, single-modality biometric recognition in real scenes suffers from low emotion classification accuracy due to its inherent limitations. In response, this paper combines deep neural networks to propose a deep-learning-based expression-EEG bimodal fusion emotion recognition method. The method builds on an improved VGG-FACE network model to achieve rapid extraction of facial expression features and shorten the training time of the network model. A wavelet soft-thresholding algorithm removes artifacts from EEG signals to extract high-quality EEG signal features. Then, based on long short-term memory network models and a decision-level fusion method, the model is built and trained on the feature data extracted from the two modalities to perform the final bimodal fusion emotion classification. Finally, the proposed method is verified on the MAHNOB-HCI dataset. Experimental results show that the proposed model achieves a high recognition accuracy of 0.89, an 8.51% improvement over the traditional LSTM model, and shortens running time by about 20 s compared with the traditional method.
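The wavelet soft-thresholding step above is a standard artifact-suppression rule: shrink each wavelet coefficient toward zero by the threshold. A minimal sketch in Python; the universal threshold shown is a common default, an assumption, since the abstract does not specify the paper's threshold rule.

```python
import numpy as np

def soft_threshold(coeffs, thr):
    """Shrink wavelet coefficients toward zero: sign(c) * max(|c| - thr, 0)."""
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - thr, 0.0)

def universal_threshold(detail_coeffs):
    """Donoho-Johnstone universal threshold: sigma * sqrt(2 * ln(N)),
    with sigma estimated from the median absolute deviation of the
    finest-scale detail coefficients."""
    sigma = np.median(np.abs(detail_coeffs)) / 0.6745
    return sigma * np.sqrt(2.0 * np.log(len(detail_coeffs)))
```

In a full pipeline the EEG signal would be decomposed, each detail band shrunk with `soft_threshold`, and the signal reconstructed.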


Subjects
Deep Learning , Electroencephalography/statistics & numerical data , Emotions/classification , Emotions/physiology , Algorithms , Computational Biology , Computer Simulation , Databases, Factual/statistics & numerical data , Facial Expression , Humans , Neural Networks, Computer , Pattern Recognition, Automated
4.
PLoS One ; 16(5): e0250922, 2021.
Article in English | MEDLINE | ID: mdl-33984002

ABSTRACT

BACKGROUND: Measuring implicit attitudes is difficult due to social desirability (SD). A new method, the Emotion Based Approach (EBA), can solve this by using emotions from a display of faces as response categories. We applied this approach to an EBA Spirituality tool (EBA-SPT) and an Actual Situation tool (EBA-AST). Our aim was to assess the structure, reliability and validity of the tools and to compare two EBA assessment approaches, i.e., an explicit one (assessing only final replies to items) and an implicit one (also assessing the selection process). METHODS: We obtained data on a sample of Czech adults (n = 522, age 30.3±12.58; 27.0% men) via an online survey; cortisol was assessed in 46 participants. We assessed the structure and psychometric properties (internal consistency and test-retest reliability; convergent, discriminant, and criterion validity) of the EBA and examined the differences between the explicit and implicit EBA approaches. RESULTS: We found acceptable-to-good internal consistency reliability of the EBA tools, acceptable discriminant validity between them, and low (neutral expression) to good (joy) test-retest reliability for the concrete emotions assessed by the tools. The implicit EBA approach showed stronger correlations between emotions and weaker convergent validity, but higher criterion validity, than the explicit approach and standard questionnaires. CONCLUSION: Compared to standard questionnaires, EBA is a more reliable approach for measuring attitudes, with the implicit approach, which reflects the selection process, yielding the best results.
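Internal consistency of instruments like the EBA-SPT is conventionally reported as Cronbach's alpha. A sketch of the standard formula, not the authors' exact analysis pipeline (which also covered test-retest and validity coefficients):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_subjects, k_items) score matrix:
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)          # per-item sample variance
    total_var = items.sum(axis=1).var(ddof=1)      # variance of the sum score
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)
```

Perfectly correlated items yield alpha = 1; uncorrelated items drive it toward 0.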


Subjects
Emotions/classification , Facial Expression , Psychometrics/methods , Adolescent , Adult , Aged , Aged, 80 and over , Attitude , Face , Female , Humans , Male , Middle Aged , Photography/methods , Reproducibility of Results , Surveys and Questionnaires
5.
Sci Rep ; 11(1): 5214, 2021 03 04.
Article in English | MEDLINE | ID: mdl-33664365

ABSTRACT

Understanding demographic differences in the facial expression of happiness has crucial implications for social communication. However, prior research on facial emotion expression has mostly focused on the effect of a single demographic factor (typically gender, race, or age) and is limited by the small image datasets collected in laboratory settings. First, we used 30,000 (4,800 after pre-processing) real-world facial images from Flickr to analyze the facial expression of happiness as indicated by the intensity level of two distinctive facial action units, the Cheek Raiser (AU6) and the Lip Corner Puller (AU12), obtained automatically via a deep learning algorithm that we developed after training on 75,000 images. Second, we conducted a statistical analysis of the intensity level of happiness, with both the main effects and the interaction effects of three core demographic factors on AU12 and AU6. Our results show that females generally display a higher AU12 intensity than males. African Americans tend to exhibit higher AU6 and AU12 intensities when compared with Caucasians and Asians. The older age groups, especially the 40-69-year-olds, generally display a stronger AU12 intensity than the 0-3-year-old group. Our interdisciplinary study provides better generalization and a deeper understanding of how different gender, race and age groups express the emotion of happiness.


Subjects
Emotions/classification , Facial Expression , Facial Recognition/physiology , Happiness , Adult , Aged , Anger/classification , Anger/physiology , Cheek/physiology , Deep Learning , Emotions/physiology , Face/physiology , Female , Humans , Male , Middle Aged
6.
Sensors (Basel) ; 21(5)2021 Feb 24.
Article in English | MEDLINE | ID: mdl-33668254

ABSTRACT

Speech emotion recognition (SER) is a natural method of recognizing individual emotions in everyday life. To distribute SER models to real-world applications, some key challenges must be overcome, such as the lack of datasets tagged with emotion labels and the weak generalization of the SER model for an unseen target domain. This study proposes a multi-path and group-loss-based network (MPGLN) for SER to support multi-domain adaptation. The proposed model includes a bidirectional long short-term memory-based temporal feature generator and a transferred feature extractor from the pre-trained VGG-like audio classification model (VGGish), and it learns simultaneously based on multiple losses according to the association of emotion labels in the discrete and dimensional models. For the evaluation of the MPGLN SER as applied to multi-cultural domain datasets, the Korean Emotional Speech Database (KESD), including KESDy18 and KESDy19, is constructed, and the English-speaking Interactive Emotional Dyadic Motion Capture database (IEMOCAP) is used. The evaluation of multi-domain adaptation and domain generalization showed 3.7% and 3.5% improvements, respectively, of the F1 score when comparing the performance of MPGLN SER with a baseline SER model that uses a temporal feature generator. We show that the MPGLN SER efficiently supports multi-domain adaptation and reinforces model generalization.


Subjects
Databases, Factual , Emotions/classification , Machine Learning , Pattern Recognition, Automated , Speech , Humans
7.
PLoS One ; 16(2): e0247131, 2021.
Article in English | MEDLINE | ID: mdl-33600467

ABSTRACT

Emotion plays a significant role in interpersonal communication and in improving social life. In recent years, facial emotion recognition has been widely adopted in developing human-computer interfaces (HCI) and humanoid robots. In this work, a triangulation method for extracting a novel set of geometric features is proposed to classify six emotional expressions (sadness, anger, fear, surprise, disgust, and happiness) using computer-generated markers. The subject's face is detected using Haar-like features. A mathematical model places eight virtual markers at defined locations on the subject's face in an automated way. Five triangles are formed from the eight markers' positions. These eight markers are then continuously tracked by the Lucas-Kanade optical flow algorithm while subjects articulate facial expressions. The movement of the markers during facial expression directly changes the properties of each triangle. The area of the triangle (AoT), the inscribed circle circumference (ICC), and the inscribed circle area of the triangle (ICAT) are extracted as features to classify the facial emotions. These features are used to distinguish the six facial emotions using various machine learning algorithms. The ICAT feature gives the best mean classification rate, 98.17%, with a Random Forest (RF) classifier, compared with the other features and classifiers.
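The three triangle features follow directly from the marker coordinates via Heron's formula and the inradius identity r = Area / semi-perimeter. A sketch, with hypothetical 2-D marker positions standing in for the tracked virtual markers:

```python
import math

def triangle_features(p1, p2, p3):
    """Area of triangle (AoT), inscribed-circle circumference (ICC),
    and inscribed-circle area (ICAT) from three 2-D marker points."""
    a = math.dist(p2, p3)
    b = math.dist(p1, p3)
    c = math.dist(p1, p2)
    s = (a + b + c) / 2.0                                # semi-perimeter
    area = math.sqrt(s * (s - a) * (s - b) * (s - c))    # Heron's formula
    r = area / s                                         # inradius
    return area, 2.0 * math.pi * r, math.pi * r * r
```

For a 3-4-5 right triangle the inradius is exactly 1, which makes the function easy to sanity-check.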


Subjects
Emotions/classification , Machine Learning , Adult , Facial Recognition , Female , Humans , Male , Young Adult
8.
IEEE Trans Image Process ; 30: 2016-2028, 2021.
Article in English | MEDLINE | ID: mdl-33439841

ABSTRACT

Facial expression recognition is of significant importance in criminal investigation and digital entertainment. Under unconstrained conditions, existing expression datasets are highly class-imbalanced, and the similarity between expressions is high. Previous methods tend to improve the performance of facial expression recognition through deeper or wider network structures, resulting in increased storage and computing costs. In this paper, we propose a new adaptive supervised objective named AdaReg loss, re-weighting category importance coefficients to address this class imbalance and increasing the discrimination power of expression representations. Inspired by human beings' cognitive mode, an innovative coarse-fine (C-F) labels strategy is designed to guide the model from easy to difficult to classify highly similar representations. On this basis, we propose a novel training framework named the emotional education mechanism (EEM) to transfer knowledge, composed of a knowledgeable teacher network (KTN) and a self-taught student network (STSN). Specifically, KTN integrates the outputs of coarse and fine streams, learning expression representations from easy to difficult. Under the supervision of the pre-trained KTN and existing learning experience, STSN can maximize the potential performance and compress the original KTN. Extensive experiments on public benchmarks demonstrate that the proposed method achieves superior performance compared to current state-of-the-art frameworks with 88.07% on RAF-DB, 63.97% on AffectNet and 90.49% on FERPlus.
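The AdaReg loss above adapts category importance coefficients during training; its exact form is not given in the abstract. As a baseline illustration of the general re-weighting idea it builds on, here is a generic inverse-frequency weighted cross-entropy, not the AdaReg formula:

```python
import numpy as np

def inverse_frequency_weights(labels, n_classes):
    """w_c = N / (K * n_c): rare expression classes get larger loss weights."""
    counts = np.bincount(labels, minlength=n_classes).astype(float)
    return len(labels) / (n_classes * counts)

def weighted_cross_entropy(probs, labels, weights):
    """Mean of -w_y * log(p_y) over the batch, with p_y the predicted
    probability of each sample's true class."""
    p = probs[np.arange(len(labels)), labels]
    return float(np.mean(-weights[labels] * np.log(p)))
```

With uniform weights this reduces to ordinary cross-entropy; the re-weighting simply scales each sample's loss by its class weight.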


Subjects
Facial Expression , Image Processing, Computer-Assisted/methods , Machine Learning , Pattern Recognition, Automated/methods , Algorithms , Databases, Factual , Emotions/classification , Face/physiology , Humans
9.
IEEE Trans Vis Comput Graph ; 27(7): 3168-3181, 2021 07.
Article in English | MEDLINE | ID: mdl-31902765

ABSTRACT

Analyzing students' emotions from classroom videos can help both teachers and parents quickly know the engagement of students in class. The availability of high-definition cameras creates opportunities to record class scenes. However, watching videos is time-consuming, and it is challenging to gain a quick overview of the emotion distribution and find abnormal emotions. In this article, we propose EmotionCues, a visual analytics system to easily analyze classroom videos from the perspective of emotion summary and detailed analysis, which integrates emotion recognition algorithms with visualizations. It consists of three coordinated views: a summary view depicting the overall emotions and their dynamic evolution, a character view presenting the detailed emotion status of an individual, and a video view enhancing the video analysis with further details. Considering the possible inaccuracy of emotion recognition, we also explore several factors affecting the emotion analysis, such as face size and occlusion. They provide hints for inferring the possible inaccuracy and the corresponding reasons. Two use cases and interviews with end users and domain experts are conducted to show that the proposed system could be useful and effective for analyzing emotions in the classroom videos.


Subjects
Emotions/classification , Facial Expression , Image Processing, Computer-Assisted/methods , Schools , Video Recording/methods , Algorithms , Child , Humans , Students
10.
IEEE/ACM Trans Comput Biol Bioinform ; 18(5): 1710-1721, 2021.
Article in English | MEDLINE | ID: mdl-32833640

ABSTRACT

Affective computing is one of the key technologies for achieving advanced brain-machine interfacing and is an increasingly important research direction in artificial intelligence. Emotion recognition is closely related to affective computing. Although EEG-based emotion recognition has attracted increasing attention, subject-independent emotion recognition still faces enormous challenges. To address these challenges, we propose a subject-independent emotion recognition algorithm based on a dynamic empirical convolutional neural network (DECNN). Combining the advantages of empirical mode decomposition (EMD) and differential entropy (DE), we propose a dynamic differential entropy (DDE) algorithm to extract features from EEG signals. The extracted DDE features are then classified by convolutional neural networks (CNN). Finally, the proposed algorithm is verified on the SJTU Emotion EEG Dataset (SEED). In addition, we discuss the brain areas closely related to emotion and design the best profile of electrode placements to reduce calculation and complexity. Experimental results show that the accuracy of this algorithm is 3.53 percent higher than that of state-of-the-art emotion recognition methods. Moreover, we studied the key electrodes for EEG emotion recognition, which is of guiding significance for the development of wearable EEG devices.
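Differential entropy, one of the two ingredients of the proposed DDE feature, has a closed form for a band-limited EEG segment under the usual Gaussianity assumption: DE = 0.5 ln(2πeσ²). A minimal sketch (how the paper combines this with EMD modes is not detailed in the abstract):

```python
import numpy as np

def differential_entropy(band_signal):
    """DE of an approximately Gaussian band-limited signal:
    0.5 * ln(2 * pi * e * sigma^2), with sigma^2 the signal variance."""
    var = np.var(band_signal)
    return 0.5 * np.log(2.0 * np.pi * np.e * var)
```

For a unit-variance segment this evaluates to 0.5 ln(2πe) ≈ 1.419.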


Subjects
Electroencephalography , Emotions/classification , Neural Networks, Computer , Signal Processing, Computer-Assisted , Algorithms , Brain/physiology , Entropy , Female , Humans , Male
11.
Sensors (Basel) ; 20(24)2020 Dec 16.
Article in English | MEDLINE | ID: mdl-33339334

ABSTRACT

As the number of patients with Alzheimer's disease (AD) increases, the effort needed to care for these patients increases as well. At the same time, advances in information and sensor technologies have reduced caring costs, providing a potential pathway for developing healthcare services for AD patients. For instance, if a virtual reality (VR) system can provide emotion-adaptive content, the time that AD patients spend interacting with VR content is expected to be extended, allowing caregivers to focus on other tasks. As the first step towards this goal, in this study, we develop a classification model that detects AD patients' emotions (e.g., happy, peaceful, or bored). We first collected electroencephalography (EEG) data from 30 Korean female AD patients who watched emotion-evoking videos at a medical rehabilitation center. We applied conventional machine learning algorithms, such as a multilayer perceptron (MLP) and support vector machine, along with deep learning models of recurrent neural network (RNN) architectures. The best performance was obtained from MLP, which achieved an average accuracy of 70.97%; the RNN model's accuracy reached only 48.18%. Our study results open a new stream of research in the field of EEG-based emotion detection for patients with neurological disorders.


Subjects
Alzheimer Disease , Electroencephalography , Emotions/classification , Machine Learning , Neural Networks, Computer , Alzheimer Disease/diagnosis , Female , Humans
12.
Suma psicol ; 27(2): 80-87, Jul.-Dec. 2020. tab, graf
Article in Spanish | LILACS, Index Psicologia - Periódicos, COLNAL | ID: biblio-1145117

ABSTRACT

Emotional appraisal is a crucial stage of emotional processing that prepares for action (coping). During this process, different responses are generated from the evaluation of emotional aspects of the stimuli. These variations may be due to the influence of individual characteristics. The literature points to temperament as one of the factors associated with differences in emotional appraisal and coping. This paper analyzes the relationship between the emotional appraisal of visual stimuli and temperamental characteristics obtained through the Children's Behavior Questionnaire (CBQ). For this purpose, 198 preschoolers aged four and five assigned one of three possible emotional appraisals to 15 images (negative, neutral, and positive), and the number of appraisals was compared according to temperamental characteristics. A higher number of negative appraisals was found in the 4-year group than in the 5-year group (p = .056, partial η² = .031), and of positive appraisals in the group with a high effortful control score compared with the low-score group (p = .020, partial η² = .029), suggesting an association between emotional appraisal, age, and effortful control. This result could be due to the children diverting their attention from the negative aspects of the stimuli.


Subjects
Humans , Male , Female , Child, Preschool , Emotions/classification , Temperament , Individuality
13.
PLoS Comput Biol ; 16(10): e1008335, 2020 10.
Article in English | MEDLINE | ID: mdl-33112846

ABSTRACT

Facial expressions carry key information about an individual's emotional state. Research into the perception of facial emotions typically employs static images of a small number of artificially posed expressions taken under tightly controlled experimental conditions. However, such approaches risk missing potentially important facial signals and within-person variability in expressions. The extent to which patterns of emotional variance in such images resemble more natural ambient facial expressions remains unclear. Here we advance a novel protocol for eliciting natural expressions from dynamic faces, using a dimension of emotional valence as a test case. Subjects were video recorded while delivering either positive or negative news to camera, but were not instructed to deliberately or artificially pose any specific expressions or actions. A PCA-based active appearance model was used to capture the key dimensions of facial variance across frames. Linear discriminant analysis distinguished facial change determined by the emotional valence of the message, and this also generalised across subjects. By sampling along the discriminant dimension, and back-projecting into the image space, we extracted a behaviourally interpretable dimension of emotional valence. This dimension highlighted changes commonly represented in traditional face stimuli such as variation in the internal features of the face, but also key postural changes that would typically be controlled away such as a dipping versus raising of the head posture from negative to positive valences. These results highlight the importance of natural patterns of facial behaviour in emotional expressions, and demonstrate the efficacy of using data-driven approaches to study the representation of these cues by the perceptual system. The protocol and model described here could be readily extended to other emotional and non-emotional dimensions of facial variance.
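In the two-class valence case described above, linear discriminant analysis on PCA appearance coefficients reduces to Fisher's linear discriminant. A sketch under the standard pooled-covariance formulation (the authors' implementation details are not stated in the abstract; the small ridge term is an assumption for numerical stability):

```python
import numpy as np

def fisher_direction(X_pos, X_neg):
    """Two-class Fisher LDA: w = Sw^{-1} (mu_pos - mu_neg),
    with Sw the pooled within-class covariance. Returns a unit vector."""
    mu_p, mu_n = X_pos.mean(axis=0), X_neg.mean(axis=0)
    Sw = np.cov(X_pos, rowvar=False) + np.cov(X_neg, rowvar=False)
    w = np.linalg.solve(Sw + 1e-6 * np.eye(Sw.shape[0]), mu_p - mu_n)
    return w / np.linalg.norm(w)
```

Projecting frames onto `w` and back-projecting sampled points into image space is what yields the interpretable valence dimension described in the abstract.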


Subjects
Emotions/classification , Facial Expression , Image Processing, Computer-Assisted/methods , Adult , Algorithms , Face/anatomy & histology , Face/physiology , Female , Humans , Male , Video Recording
14.
Sci Rep ; 10(1): 16035, 2020 09 29.
Article in English | MEDLINE | ID: mdl-32994423

ABSTRACT

A high proportion of pet dogs show fear-related behavioural problems, with noise fears being most prevalent. Nonetheless, few studies have objectively evaluated fear expression in this species. Using owner-provided video recordings, we coded behavioural expressions of pet dogs during a real-life firework situation at New Year's Eve and compared them to behaviour of the same dogs on a different evening without fireworks (control condition), using Wilcoxon signed ranks tests. A backwards-directed ear position, measured at the base of the ear, was most strongly associated with the fireworks condition (effect size: Cohen's d = 0.69). Durations of locomotion (d = 0.54) and panting (d = 0.45) were also higher during fireworks than during the control condition. Vocalisations (d = 0.40), blinking (d = 0.37), and hiding (d = 0.37) were increased during fireworks, but this was not significant after sequential Bonferroni correction. This could possibly be attributed to the high inter-individual variability in the frequency of blinking and the majority of subjects not vocalising or hiding at all. Thus, individual differences must be taken into account when aiming to assess an individual's level of fear, as relevant measures may not be the same for all individuals. Firework exposure was not associated with an elevated rate of other so-called 'stress signals', lip licking and yawning.
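The effect sizes above are Cohen's d for a within-subject contrast (fireworks vs. control evening). The abstract does not state which paired-d variant was used, so the common mean-of-differences form shown here is an assumption:

```python
import numpy as np

def cohens_d_paired(condition_a, condition_b):
    """Paired-samples effect size: mean of the per-subject differences
    divided by the sample standard deviation of those differences."""
    diff = np.asarray(condition_a, float) - np.asarray(condition_b, float)
    return diff.mean() / diff.std(ddof=1)
```

Another common variant divides by the standard deviation of one condition instead, which gives smaller values for highly correlated measurements.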


Subjects
Behavior, Animal/physiology , Fear/classification , Noise/adverse effects , Animals , Dogs , Emotions/classification , Explosions , Explosive Agents/adverse effects , Female , Male , Videotape Recording
15.
Rev. psiquiatr. salud ment. (Barc., Ed. impr.) ; 13(3): 140-149, Jul.-Sep. 2020. tab, graf
Article in Spanish | IBECS | ID: ibc-199845

ABSTRACT

INTRODUCTION: Facial emotion recognition (FER) is a fundamental component of social interaction. FER is known to be impaired both in patients with severe mental disorder (SMD) and in those with a history of childhood trauma. MATERIAL AND METHODS: We analyzed the possible relationship between childhood trauma, measured by the CTQ scale, and facial expression recognition, beyond the presence of an SMD, in a sample of three types of subjects (n=321): healthy controls (n=179), patients with BPD (n=69), and patients with a first psychotic episode (n=73). Clinical and socio-demographic data were also collected. The relationship was analyzed by multivariate regression, adjusting for sex, age, IQ, current drug use, and the group to which the subject belonged. RESULTS: Sexual and/or physical trauma in childhood was associated, independently of the existence of SMD, with a worse total FER score and with a worse recognition rate for expressions of happiness. Furthermore, subjects with a history of childhood trauma attributed expressions of anger and fear to neutral and happy faces more frequently, independently of other variables. CONCLUSIONS: Childhood trauma seems to influence subjects' ability to recognize facial expressions, independently of SMD. Since trauma is a preventable factor with specific treatment, attention should be paid to this history in clinical populations.


Subjects
Humans , Male , Female , Adolescent , Young Adult , Adult , Middle Aged , Aged , Psychological Trauma/psychology , Emotions/classification , Facial Expression , Mental Disorders/psychology , Facial Recognition , Adult Survivors of Child Abuse/psychology , Child Abuse, Sexual/psychology , Child Abuse/psychology , Antisocial Personality Disorder/psychology , Substance-Related Disorders/psychology , Psychological Tests/statistics & numerical data , Psychometrics/methods , Case-Control Studies
16.
Comput Math Methods Med ; 2020: 8303465, 2020.
Article in English | MEDLINE | ID: mdl-32831902

ABSTRACT

Human emotion recognition has been a major field of research in recent decades owing to its noteworthy academic and industrial applications. Most state-of-the-art methods, however, identify emotions by analyzing facial images; emotion recognition using electroencephalogram (EEG) signals has received less attention, even though EEG has the advantage of capturing real emotion. Moreover, very few EEG signal databases are publicly available for affective computing. In this work, we present a database consisting of EEG signals from 44 volunteers, 23 of them female. A 32-channel CLARITY EEG Traveler sensor was used to record four emotional states, namely happy, fear, sad, and neutral, elicited by showing 12 videos (three per emotion). Participants were mapped to the emotion they felt after watching each video. The recorded EEG signals are then used to classify the four emotions based on the discrete wavelet transform and an extreme learning machine (ELM), establishing an initial benchmark classification performance. The ELM algorithm is used for channel selection followed by subband selection. The proposed method performs best when features are captured from the gamma subband of the FP1-F7 channel, with 94.72% accuracy. The database will be made available to researchers for affective recognition applications.
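The subband features above come from a discrete wavelet transform. The abstract does not name the wavelet family, so the Haar wavelet below is purely illustrative: one analysis level splits the signal into approximation (low-frequency) and detail (high-frequency) coefficient streams, and repeating the split on the approximation yields the subband tree.

```python
import numpy as np

def haar_dwt(signal):
    """One level of the Haar discrete wavelet transform (even-length input):
    returns (approximation, detail) coefficients, each half the input length."""
    x = np.asarray(signal, dtype=float)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return approx, detail
```

The orthonormal scaling preserves signal energy across the two coefficient streams, which is what makes per-subband energy features well defined.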


Subjects
Algorithms , Electroencephalography/methods , Emotions/classification , Benchmarking , Brain/anatomy & histology , Brain/physiology , Brain Waves/physiology , Computational Biology , Databases, Factual , Electroencephalography/statistics & numerical data , Emotions/physiology , Female , Humans , Machine Learning , Male , Mathematical Concepts , Neural Networks, Computer , Photic Stimulation , Video Recording
17.
IEEE Trans Biomed Circuits Syst ; 14(4): 838-851, 2020 08.
Article in English | MEDLINE | ID: mdl-32746354

ABSTRACT

Chronic neurological disorders (CNDs) are lifelong diseases that cannot be eradicated, but their severe effects can be alleviated by early preemptive measures. CNDs such as Alzheimer's disease, Autism Spectrum Disorder (ASD), and Amyotrophic Lateral Sclerosis (ALS) are chronic ailments of the central nervous system that cause the degradation of emotional and cognitive abilities. Long-term continuous monitoring with neuro-feedback of human emotions is crucial in mitigating the harmful effects of CNDs on patients. This paper presents a hardware-efficient, dedicated human emotion classification processor for CNDs. Scalp EEG is used for emotion classification along the valence and arousal scales. A linear support vector machine classifier is used with the power spectral density, the logarithmic interhemispheric power spectral ratio, and the interhemispheric power spectral difference of eight EEG channel locations suitable for a wearable non-invasive classification system. A look-up-table based logarithmic division unit (LDU) is used to represent the division features in machine learning (ML) applications. The implemented LDU reduces the cost of integer division by 34% for ML applications. The implemented emotion classification processor achieved accuracies of 72.96% and 73.14% for valence and arousal classification, respectively, on multiple publicly available datasets. The 2 × 3 mm² processor is fabricated in a 0.18 µm 1P6M CMOS process with power and energy consumption of 2.04 mW and 16 µJ/classification, respectively, for 8-channel operation.


Subjects
Electroencephalography , Emotions/classification , Monitoring, Physiologic , Nervous System Diseases , Signal Processing, Computer-Assisted/instrumentation , Arousal/physiology , Autism Spectrum Disorder/psychology , Autism Spectrum Disorder/rehabilitation , Chronic Disease , Electroencephalography/instrumentation , Electroencephalography/methods , Equipment Design , Humans , Lab-On-A-Chip Devices , Machine Learning , Male , Monitoring, Physiologic/instrumentation , Monitoring, Physiologic/methods , Nervous System Diseases/psychology , Nervous System Diseases/rehabilitation , Nervous System Diseases/therapy , Support Vector Machine
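The look-up-table logarithmic division unit described above exploits the identity log2(a/b) = log2(a) − log2(b): division becomes a subtraction of logarithms, with the fractional (mantissa) logarithm read from a small precomputed table rather than computed. A hedged pure-Python sketch of the idea follows; the table width, normalization scheme, and the floating-point antilog at the end are illustrative assumptions, since the fabricated LDU operates on fixed-point values in hardware:

```python
import math

LUT_BITS = 8  # table resolution; hardware trades this off against area
LOG_LUT = [math.log2(1.0 + i / (1 << LUT_BITS)) for i in range(1 << LUT_BITS)]

def lut_log2(x):
    """Approximate log2 of a positive integer: the leading-one position
    gives the integer part, a table lookup gives the fractional part."""
    e = x.bit_length() - 1            # exponent = position of the leading one
    frac = (x - (1 << e)) / (1 << e)  # normalized mantissa in [0, 1)
    return e + LOG_LUT[int(frac * (1 << LUT_BITS))]

def lut_divide(a, b):
    """Approximate a / b by subtracting table-based logarithms."""
    return 2.0 ** (lut_log2(a) - lut_log2(b))

ratio = lut_divide(1000, 40)  # exact quotient is 25.0
```

With an 8-bit table the relative error of the quotient stays below about 1%, illustrating the accuracy/area trade-off a hardware LDU tunes via the table width.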
18.
Medicine (Baltimore) ; 99(29): e21154, 2020 Jul 17.
Article in English | MEDLINE | ID: mdl-32702870

ABSTRACT

BACKGROUND: Traumatic brain injury (TBI) refers to head injuries that disrupt the normal function of the brain. TBI commonly leads to a wide range of potential psychosocial functional deficits. Although psychosocial function after TBI is influenced by many factors, growing evidence shows that social cognitive skills are critical contributors. Facial emotion recognition, one of the higher-level skills of social cognition, is the ability to perceive and recognize the emotional states of others based on their facial expressions. Numerous studies have assessed facial emotion recognition performance in adult patients with TBI, but findings have been inconsistent. The aim of this study is to conduct a meta-analysis characterizing facial emotion recognition in adult patients with TBI. METHODS: A systematic literature search will be performed for eligible studies published up to March 19, 2020 in three international databases (PubMed, Web of Science, and Embase). Article retrieval, screening, quality evaluation, and data collection will be conducted by two independent researchers. The meta-analysis will be conducted using Stata 15.0 software. RESULTS: This meta-analysis will provide a high-quality synthesis of existing evidence on facial emotion recognition in adult patients with TBI and will analyze recognition performance in different aspects (i.e., recognition of negative emotions, positive emotions, or any specific basic emotion). CONCLUSIONS: This meta-analysis will provide evidence of facial emotion recognition performance in adult patients with TBI. INPLASY REGISTRATION NUMBER: INPLASY202050109.


Subjects
Brain Injuries, Traumatic/psychology , Clinical Protocols , Emotions/classification , Facial Recognition , Adult , Brain Injuries, Traumatic/classification , Facial Expression , Humans , Meta-Analysis as Topic , Systematic Reviews as Topic
19.
Psicológica (Valencia. Internet) ; 41(2): 84-102, Jul. 2020. graphs
Article in English | IBECS | ID: ibc-199981

ABSTRACT

The proliferation of fake news on the internet requires understanding which factors modulate their credibility and taking action to limit their impact. A number of recent studies have shown an effect of foreign language on decision making: reading in a foreign language engages a more rational, analytic mode of thinking (Costa et al., 2014, Cognition). This analytic mode of processing may lead to a decrease in the credibility of fake news. Here we conducted two experiments to examine whether fake news stories presented to university students were more credible in the native language than in a foreign language. Bayesian analyses in both experiments supported the hypothesis that the credibility of fake news is not modulated by language. Critically, Experiment 2 also showed a strong direct relationship between credibility and negative emotionality regardless of language. This pattern suggests that the driving force behind engagement of an automatic thinking mode when reading fake news is not language (native vs. foreign) but emotionality.


Subjects
Humans , Male , Female , Adolescent , Young Adult , Language , Emotions/classification , Ideological Falsehood , Fraud , Webcasts as Topic , Lie Detection/psychology , Information Seeking Behavior/classification , Information Dissemination
20.
Psicológica (Valencia. Internet) ; 41(2): 162-182, Jul. 2020. tables, graphs
Article in English | IBECS | ID: ibc-199984

ABSTRACT

Schizotypy is defined as a combination of traits qualitatively similar to those found in schizophrenia, though of lesser severity, that can be found in the nonclinical population. Some studies suggest that people with schizotypal traits have problems recognising emotional facial expressions. In this research, we further explore this issue and investigate, for the first time, whether the differential outcomes procedure (DOP) may improve the recognition of emotional facial expressions. Participants in our study were students who completed the ESQUIZO-Q-A and were assigned to two groups, high schizotypy (HS) and low schizotypy (LS). They then performed a task in which they had to recognise the emotional facial expression of a set of faces. Participants in the HS and LS groups did not differ in their performance. Importantly, all participants showed better recognition of emotional facial expressions when they were trained with differential outcomes. This novel finding may be relevant for clinical practice, since the DOP emerges as a tool that may improve the recognition of emotional facial expressions.


Subjects
Humans , Male , Female , Adolescent , Young Adult , Adult , Schizotypal Personality Disorder/psychology , Facial Expression , Facial Recognition , Emotions/classification , Affective Symptoms/psychology , Schizophrenic Psychology